Krylov subspace

In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A^0=I), that is,

\mathcal{K}_r(A,b) = \operatorname{span} \, \{ b, Ab, A^2b, \ldots, A^{r-1}b \}.
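As a minimal numerical sketch of this definition (numpy, with an illustrative 2-by-2 diagonal matrix; the helper name `krylov_matrix` is ours, not standard), one can stack the vectors b, Ab, ..., A^{r-1}b as columns and inspect the dimension of their span:

```python
import numpy as np

def krylov_matrix(A, b, r):
    """Return the n-by-r matrix whose columns are b, Ab, ..., A^{r-1} b."""
    cols = [b]
    for _ in range(r - 1):
        cols.append(A @ cols[-1])  # next power of A applied to b
    return np.column_stack(cols)

A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([1.0, 1.0])

K = krylov_matrix(A, b, 3)
# For this 2-by-2 matrix the subspace fills out at dimension 2, so the
# third column is a linear combination of the first two.
print(np.linalg.matrix_rank(K))  # → 2
```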


Background
The concept is named after the Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931.


Properties
  • \mathcal{K}_r(A,b) \subset \mathcal{K}_{r+1}(A,b) and A\,\mathcal{K}_r(A,b)\subset \mathcal{K}_{r+1}(A,b).
  • Let r_0 = \operatorname{dim} \operatorname{span} \, \{ b, Ab, A^2b, \ldots \}. Then the vectors \{ b, Ab, A^2b, \ldots, A^{r-1}b \} are linearly independent unless r>r_0; \mathcal{K}_r(A,b) \subset \mathcal{K}_{r_0}(A,b) for all r; and \operatorname{dim} \mathcal{K}_{r_0}(A,b) = r_0. Thus r_0 is the maximal dimension of the Krylov subspaces \mathcal{K}_r(A,b).
  • The maximal dimension satisfies r_0\leq 1 + \operatorname{rank} A and r_0 \leq n.
  • Consider \dim \operatorname{span} \, \{ I, A, A^2, \ldots \} = \deg p, where p is the minimal polynomial of A. We have r_0\leq \deg p. Moreover, for any A, there exists a b for which this bound is tight, i.e. r_0 = \deg p.
  • \mathcal{K}_r(A,b) is a cyclic submodule generated by b of the torsion k[x]-module (k^n)^A, where k^n is the n-dimensional linear space over k (with x acting as A).
  • k^n can be decomposed as the direct sum of Krylov subspaces.
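The minimal-polynomial bound above can be illustrated numerically (an assumed toy example in numpy): for A = diag(2, 2, 3) the minimal polynomial is (x-2)(x-3), of degree 2, so every Krylov subspace has dimension at most 2 even though A is 3-by-3:

```python
import numpy as np

# A = diag(2, 2, 3): minimal polynomial (x - 2)(x - 3) has degree 2,
# so dim K_r(A, b) <= 2 for every starting vector b.
A = np.diag([2.0, 2.0, 3.0])
b = np.ones(3)

K = np.column_stack([b, A @ b, A @ A @ b])  # [b, Ab, A^2 b]
r0 = np.linalg.matrix_rank(K)
print(r0)  # → 2, the maximal Krylov dimension for this b
```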


Use
Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. Many linear dynamical system tests in control theory, especially those related to controllability and observability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps, so the uncontrollable and unobservable subspaces are simply the orthogonal complement of the Krylov subspace.

Modern iterative methods such as Arnoldi iteration can be used to find one (or a few) eigenvalues of large sparse matrices or to solve large systems of linear equations. They avoid matrix-matrix operations; instead they multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then multiplies that vector by A to find A^2b, and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector product without an explicit representation of A, giving rise to matrix-free methods.
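The matrix-free setting can be sketched as follows (an assumed example: the operator is a hypothetical shift-plus-scale rule, never stored as an explicit matrix, and the Krylov sequence is built using only matrix-vector products):

```python
import numpy as np

def matvec(v):
    """Apply A to v without forming A: (A v)_i = 2 v_i + v_{i-1}."""
    out = 2.0 * v
    out[1:] += v[:-1]
    return out

# Build the Krylov sequence b, Ab, A^2 b, A^3 b using only matvec.
b = np.zeros(5)
b[0] = 1.0
seq = [b]
for _ in range(3):
    seq.append(matvec(seq[-1]))
K = np.column_stack(seq)
print(K.shape)  # → (5, 4): four Krylov vectors of length 5
```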


Issues
Because the vectors usually soon become almost linearly dependent due to the properties of power iteration, methods relying on Krylov subspaces frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.
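A minimal Arnoldi iteration sketch (numpy, modified Gram-Schmidt; the random test matrix is an assumption for illustration) shows how an orthonormal basis of the Krylov subspace is maintained instead of the nearly dependent raw powers:

```python
import numpy as np

def arnoldi(A, b, r):
    """Orthonormal basis Q of K_{r+1}(A, b) and Hessenberg H with
    A Q[:, :r] = Q H, via modified Gram-Schmidt."""
    n = b.size
    Q = np.zeros((n, r + 1))
    H = np.zeros((r + 1, r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(r):
        w = A @ Q[:, j]
        for i in range(j + 1):           # orthogonalize against earlier q_i
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # "happy breakdown": invariant subspace
            return Q[:, : j + 1], H[: j + 2, : j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
Q, H = arnoldi(A, b, 3)
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))  # → True: columns orthonormal
```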


Existing methods
The best known Krylov subspace methods are the conjugate gradient method, IDR(s) (induced dimension reduction), GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi-minimal residual), TFQMR (transpose-free QMR) and MINRES (minimal residual method).
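As one representative of these methods, here is a textbook conjugate gradient sketch (numpy; the SPD test matrix A = M^T M + I is an assumed example). Each iterate x_k minimizes the A-norm of the error over the growing Krylov subspace \mathcal{K}_k(A,b):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Textbook CG for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M.T @ M + np.eye(8)      # symmetric positive definite by construction
b = rng.standard_normal(8)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # → True
```

In exact arithmetic CG terminates in at most n steps, because the Krylov subspace stops growing once it reaches its maximal dimension r_0.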



Further reading
  • ISBN 9783764328658, Birkhäuser Verlag, 1993.
  • Yousef Saad (2003): Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, ISBN 978-0-89871-534-7.
  • Charles George Broyden and Maria Teresa Vespucci (2004): Krylov Solvers for Linear Algebraic Systems, Elsevier (Studies in Computational Mathematics 11), ISBN 0-444-51474-0.
  • ISBN 9783030552503, Springer International Publishing.
  • Iman Farahbakhsh (2020): Krylov Subspace Methods with Application in Incompressible Fluid Flow Solvers, Wiley.
